| Week | Date | Topic |
| --- | --- | --- |
| Week 1 | 2/18 | Introduction: Biology, why model, and general approach. |
| Week 2 | 2/25 | Linear algebra: Vectors, matrices, and matrix operations; reading Jordan (1986). [HW 1] |
| Week 3 | 3/4 | Perceptrons: Nomenclature, general neural network framework, application to logic problems; reading Aggarwal (2018), Ch. 1. [HW 2] |
| Week 4 | 3/11 | Attractor networks 1: Introduction to the principles of autoencoding and memory; reading Hopfield (1982). |
| Week 5 | 3/18 | Attractor networks 2: A simple autoencoder architecture and learning rule to instantiate content-addressable memory; attractor properties. |
| Week 6 | 3/25 | Attractor networks 3: Evaluating and describing the autoencoder model. [Assignment 1] |
| Week 7 | 4/1 | Backpropagation 1: Introduction to the principles of multi-layered perceptrons and error-based learning; reading Rumelhart et al. (1986). |
| Week 8 | 4/8 | Backpropagation 2: A simple multi-layered perceptron to instantiate error-based learning; non-linear input-output mappings. |
| Week 9 | 4/15 | Backpropagation 3: Evaluating and describing the multi-layered perceptron model. [Assignment 2] |
| Week 10 | 4/22 | Unsupervised learning 1: Introduction to the principles of functional self-organization and convolution in V1 orientation selectivity; reading von der Malsburg (1973). |
| Week 11 | 4/29 | Unsupervised learning 2: Unpacking the neural network model in von der Malsburg (1973). |
| Week 12 | 5/6 | Unsupervised learning 3: Evaluating the von der Malsburg model. [Assignment 3] |
| Week 13 | 5/13 | Recurrent neural networks; reading Aggarwal (2018), Ch. 2. |
| Week 14 | 5/20 | Convolutional neural networks; reading Aggarwal (2018), Ch. 3. |
| Week 15 | 5/27 | Exploding and vanishing gradients, overfitting, regularization. |
| Week 16 | 6/3 | Dragon Boat Festival (no class). |